Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed


Grafana Loki - Central log collection

Daniel Nashed – 3 February 2026 01:25:55

Grafana Loki is pretty cool. I looked into it earlier and just revisited it this week.

It's a bit comparable to Splunk, but completely open source and part of the Grafana family.
Loki is well integrated into Grafana and can be a data source for dashboards.


https://grafana.com/oss/loki/

There are two different client components to collect logs.

Prometheus Promtail is the older component. The newer component is Grafana Alloy, which can also collect other logs.
For Kubernetes metrics endpoints, Prometheus Node Exporter is still the tool of choice.


I have just added Grafana Alloy to the container project. It's a simple install option (-alloy) and needs only a single environment variable to configure.


ALLOY_PUSH_TARGET=http://grafana.example.com:3100/loki/api/v1/push
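
To verify the push target is reachable before wiring up Alloy, a quick manual test against the Loki push API can look like this (hostname taken from the example above; labels and message are arbitrary test values):

# Push a single test log line; an empty response (HTTP 204) means Loki accepted it
curl -s -X POST "http://grafana.example.com:3100/loki/api/v1/push" \
  -H "Content-Type: application/json" \
  --data-raw "{\"streams\":[{\"stream\":{\"job\":\"connectivity-test\"},\"values\":[[\"$(date +%s%N)\",\"hello from curl\"]]}]}"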

On the server side you just need a Loki instance, which Grafana leverages as a data source.

The Domino Grafana project docker-compose stack is prepared for Loki.
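
If you don't use the prepared compose stack, a minimal standalone Loki instance for a quick test could be started like this (a sketch; the image ships a default configuration that listens on port 3100):

docker run -d --name loki -p 3100:3100 grafana/loki:latest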





USB and SSD performance compared

Daniel Nashed – 1 February 2026 21:20:27

For a workshop I am looking into what type of USB sticks we want to use to boot Ubuntu from a stick.

Prices and performance differ, and there are orders of magnitude between them.
After testing I just ordered a couple of additional USB sticks for more testing.

But the list below shows the difference quite well.

There is an easy-to-use Windows tool to quickly test the performance: "winsat".

  • It's pretty clear that a USB 2.0 stick is very slow. That's not just because of the USB 2.0 standard; it also greatly depends on the chips used.
  • Good USB 3.0 hardware performs dramatically better.
  • An older Samsung NVMe is pretty good.
  • A current NVMe plays in a different league.

I might write up another post after testing the additional sticks.


winsat disk -drive E



--- Old USB 2.0 stick ---


Very slow even for copying data



Disk  Random 16.0 Read                          8.97 MB/s
Disk  Sequential 64.0 Read                     17.79 MB/s
Disk  Sequential 64.0 Write                     5.98 MB/s
Average Read Time with Sequential Writes       12.861 ms
Latency: 95th Percentile                      596.072 ms
Latency: Maximum                             1170.906 ms
Average Read Time with Random Writes          173.835 ms
Total Run Time 00:08:10.41



--- Current Samsung FIT stick ---


Quite good in terms of transfer rates and latency



Disk  Random 16.0 Read                        53.69 MB/s
Disk  Sequential 64.0 Read                   147.78 MB/s
Disk  Sequential 64.0 Write                   58.54 MB/s
Average Read Time with Sequential Writes       3.451 ms
Latency: 95th Percentile                       5.336 ms
Latency: Maximum                              11.383 ms
Average Read Time with Random Writes           3.572 ms
Total Run Time 00:00:44.84



--- NVMe internal disk on an older notebook ---

I would have expected better read performance.

But the latency is dramatically better than for a USB stick!



Disk  Random 16.0 Read                       321.10 MB/s
Disk  Sequential 64.0 Read                   433.78 MB/s
Disk  Sequential 64.0 Write                   97.33 MB/s
Average Read Time with Sequential Writes       0.620 ms
Latency: 95th Percentile                       1.839 ms
Latency: Maximum                              14.415 ms
Average Read Time with Random Writes           0.591 ms
Total Run Time 00:00:40.08



--- NVMe internal disk on my new notebook ---

Dramatic increase in read and write performance.

Latency is another 10 times better as well!



Disk  Random 16.0 Read                       1508.35 MB/s
Disk  Sequential 64.0 Read                   4414.35 MB/s
Disk  Sequential 64.0 Write                  1138.50 MB/s
Average Read Time with Sequential Writes        0.081 ms
Latency: 95th Percentile                        0.152 ms
Latency: Maximum                                1.208 ms
Average Read Time with Random Writes            0.084 ms
Total Run Time 00:00:07.33
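

On the Linux side, a roughly comparable quick check is possible with fio (a sketch, assuming fio is installed and the stick is mounted at /mnt/usb -- path and sizes are just examples):

# Sequential 64K read test against a file on the mounted stick
fio --name=seqread --filename=/mnt/usb/testfile --size=256M --bs=64k --rw=read --direct=1 --runtime=30 --group_reporting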


Domino on Linux OTS changes for Domino 14+

Daniel Nashed – 1 February 2026 20:57:54

The Domino start script and the container have built-in OTS support.
There are OTS files for first server and additional server setup, which contain SERVERSETUP_XXX variables as placeholders that you get prompted for at setup.

Domino 14.0 introduced new OTS functionality, which makes OTS setups more flexible --> https://github.com/HCL-TECH-SOFTWARE/domino-one-touch-setup?tab=readme-ov-file#document-operations

Find document operations support formulas since Domino 14.0, which makes lookups more straightforward. Otherwise, server names would need to match exactly.
That's problematic for servers with C= and OU= components in their names.
The find operation with a formula is also much more flexible in general.

Separating OTS configurations for Domino 14.0+ and older versions allows adding more specific configurations.

There are additional options in Domino 14.0+ which we might want to use over time.
In addition, some configuration settings are not needed on Domino 14+ -- for example, iNotes redirect databases.

Below is how this is planned to look.

The menu structure is extensible and I just added new JSON entries and new OTS JSON files.
I also removed the default environment variables for additional servers, because they never match existing values.

The env file for additional servers did not make much sense. Those variables all depend on your environment and need to be typed in anyway.
For a first server it can make sense to quickly test a setup and have an example for each parameter.
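
As an illustration, such a first server env file could look roughly like this (variable names follow the standard HCL One-Touch Setup naming; the file shipped with the start script may use a different subset, and all values are placeholders):

SERVERSETUP_SERVER_NAME=domino-demo
SERVERSETUP_SERVER_DOMAINNAME=Acme
SERVERSETUP_ORG_ORGNAME=Acme
SERVERSETUP_ORG_CERTIFIERPASSWORD=choose-a-password
SERVERSETUP_ADMIN_FIRSTNAME=John
SERVERSETUP_ADMIN_LASTNAME=Doe
SERVERSETUP_ADMIN_PASSWORD=choose-a-password
SERVERSETUP_NETWORK_HOSTNAME=domino-demo.acme.com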


Because some settings will not be a good fit for larger servers, I added a way to specify an "info" in the menu configuration file.
And I added an info to all of the OTS configurations, as you can see below.

I am submitting it to the develop branch of the Domino container image and the start script.

dominoctl setup


[1] Domino 14+ First server JSON
[2] Domino 14+ Additional server JSON
[3] Domino 12.x

Select [1-3] 0 to cancel? 1


Info: Small server config - For a production server review tuning like: NSF_BUFFER_POOL_SIZE_MB=1024

SERVER_NAME: my-domino-server



DKIM keys with RSA 2048 are now recommended

Daniel Nashed – 1 February 2026 18:33:57

There are two types of DKIM keys you can use:


  • RSA keys are the classical key type everyone supports
  • Ed25519 keys are based on elliptic curve crypto and are much shorter with better key strength.
     

Not every server supports Ed25519 keys yet. To ensure best compatibility, you either have to stay with RSA keys or use dual key signing with an Ed25519 key and an RSA key.

Domino DKIM supports both key types and I am running dual keys.


Earlier, the best practice was to use an RSA 1024 key, as long as it was sufficiently strong.
Now some providers require RSA 2048 keys to be fully compliant.



Why RSA 2048 keys are a challenge


The maximum length for a single string in a DNS TXT record is 255 bytes. An RSA 1024 key and an Ed25519 key fit into a single entry.

But an RSA 2048 key needs to be split into multiple parts.


This is usually not a big deal -- but it depends on your DNS provider's interface.


  1. DNS TXT records need to be enclosed in quotes

  2. When splitting the DNS TXT record, each part needs to be quoted on its own (see the sketch below).
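
For reference, splitting a long key value into quoted 255-byte chunks can be done with standard shell tools (a sketch; KEY holds the full "v=DKIM1; k=rsa; p=..." string):

# Wrap every 255-character chunk in its own quotes and join the chunks with spaces
echo "$KEY" | fold -w 255 | sed 's/.*/"&"/' | paste -sd ' ' -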


What it looks like and how to query DKIM TXT records



nslookup -type=txt rsa20260201._domainkey.nashcom.de


rsa20260201._domainkey.nashcom.de       text = "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzr/zFXV9H1HSC54U9qxSPsRNs/bngeNqJfTe8mV058hPnBPp5m2CBfAZZUHvQ1gB7pic5nUJ5rX7NuSWFB/W+9kf0UG92dLWKseUT6h7QoNIUlz0bOnNV1aji62ZUWEf1wL6iwLmbHwLYO0l8wUreoWtvwNpsnJqeW5YxSBNEHPW8EWFFtBkQ29m0xlToVJU1"
"mm9Hexn9LLkDQko90naiFxkeZy84vTixmv8xIMQVlKxZi3Arwz/xdUrGPfFwQI6Uu3IMjKzHrlOeZA5tmqBdLRwvFisAuiCY2UudkJrRt0xPjC/tHCcYcKYjLcJaFa9YWHTG8aqeeg4ApVYcyZEPQIDAQAB;"


One DNS TXT record with multiple parts


At first sight it might look like multiple records. But it is really one record with multiple parts.


Some DNS GUIs support pasting a single quoted string and chunk it on their own.


But in many DNS GUIs you have to create the chunks yourself and add one DNS entry with those multiple quoted parts.
Using the Hetzner DNS interface you just specify multiple strings. The maximum length of each part without quotes is 255 bytes, exactly as shown in my example above.


I added logic to my Hetzner DNS TXT API integration to split the record. The Hetzner API expects one entry with multiple strings like this:


{"name":"rsa20260201._domainkey","type":"TXT","ttl":60,"records":[{"value": "\"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzr/zFXV9H1HSC54U9qxSPsRNs/bngeNqJfTe8mV058hPnBPp5m2CBfAZZUHvQ1gB7pic5nUJ5rX7NuSWFB/W+9kf0UG92dLWKseUT6h7QoNIUlz0bOnNV1aji62ZUWEf1wL6iwLmbHwLYO0l8wUreoWtvwNpsnJqeW5YxSBNEHPW8EWFFtBkQ29m0xlToVJU1\" \"mm9Hexn9LLkDQko90naiFxkeZy84vTixmv8xIMQVlKxZi3Arwz/xdUrGPfFwQI6Uu3IMjKzHrlOeZA5tmqBdLRwvFisAuiCY2UudkJrRt0xPjC/tHCcYcKYjLcJaFa9YWHTG8aqeeg4ApVYcyZEPQIDAQAB;\"","comment":"Created by Domino CertMgr"}]}





Intel MacBook from 2008 runs Ubuntu 24.04 LTS with Domino on Docker

Daniel Nashed – 1 February 2026 23:20:30

I have an old Intel Mac. Apple MacBook (Late 2008) — the first unibody MacBook.

macOS support has long expired and the hardware isn't great by today's standards.
But it runs Ubuntu perfectly. I just booted it from a USB stick -- haha, it even has a DVD drive.

The CPU is an Intel Core 2 Duo P7350
  • 2 cores / 2 threads
  • 2.0 GHz
  • 64-bit x86_64
  • ISA level: x86-64-v1
     
The machine only has 4 GB of RAM, but booted from a USB 2.0 stick the Domino Docker build took just 20 minutes! That was quicker than expected!

x86-64-v1 is a bit of a problem with some applications.
Current Red Hat versions don't run -- also not in a container.

But Ubuntu works just fine. It just takes a while to install.


Watching a container build shows what needs more current CPUs to perform well.

Some software is built with -march=x86-64-v2 and does not work on this CPU.


Even though Domino isn't compiled this way, I only got Domino 12.0.2 working.

Domino 14.x returns an error:


/opt/hcl/domino/notes/latest/linux/server: CPU ISA level is lower than required
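

To check up front which microarchitecture levels a CPU supports, recent glibc versions report them via the dynamic loader (a quick check; the loader path can differ per distribution):

# Prints lines like "x86-64-v2 (supported, searched)" on capable CPUs
/lib64/ld-linux-x86-64.so.2 --help | grep "x86-64-v"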


The machine isn't really intended to run Domino and making use of the older hardware is mostly to learn what breaks and what still works.

And it could be a simple machine running Ubuntu. Not Domino. I have sufficient other hardware to run Domino.

Also the NVIDIA card is an older architecture which will not work for AI applications.


Running with ZFS would not be a good idea either. It needs RAM and it would need hardware SHA acceleration to be quick.
That's why I didn't try to install it with ZFS. But ext4 works just fine. The USB stick had ZFS.

USB 2.0 is pretty slow. The local HDD provides better performance.


The machine is still working well with Linux in general, as long as it's Ubuntu. The drivers all work, including WLAN if you load the right driver.



--------------------------------------------------------------------------------------------


Hostname      :      nsh-MacBook

Linux OS      :      Ubuntu 24.04.3 LTS

Linux Version :      24.04.3 LTS (Noble Numbat)

Kernel        :      6.14.0-37-generic

GNU libc      :      2.39

Timezone      :      CET +0100

Locale        :      en_US.UTF-8


Virt          :      none

Docker        :      29.2.0

DomDownload   :      1.1.2

DominoCtl     :      1.5.3


CPU Info      :      2 Intel(R) Core(TM)2 Duo CPU P7350 @ 2.00GHz

CPU MHz       :      1989.957

CPU Cache     :      3072 KB


Linux Uptime  :      0 day, 0 hour 24 min

Load Average  :      2.11  2.07  1.48


MemTotal      :       3.6 GB

MemAvailable  :       2.3 GB  (64.2 %)

MemCached     :       2.2 GB  (62.3 %)

MemFree       :       0.1 GB  ( 4.2 %)



Type       Size     Used    Avail  Use%   FsType   Disk                 Mounted on


Root       292G      19G     259G    7%   ext4     /dev/sda2            /

Local      292G      19G     259G    7%   ext4     /dev/sda2            /


--------------------------------------------------------------------------------------------



How to bring back Docker image create time for Docker images list -- why do vendors always know better what changes customers need?

Daniel Nashed – 30 January 2026 18:46:16

I don't know why companies just decide which improvements are helpful for their customers without providing a way to get back what was there before.
This looks like a trend. I don't see this just at Docker. But this is a good example.

Another example is Apple with the new version 26. Some of the changes really break the way users worked with their iPhone or iPad before.
The FaceTime app is a good example, but so are the changes in the Photos app. Don't get me started.

When changing functionality users are used to, it could make sense to provide a setting to go back to the previous functionality.

In case of the "docker images" command there is no direct way to go back.
You can write your own script or alias with a more complex command line behind it.

I just added a new option to dominoctl, which I usually use with the alias "d". So on my machines I can quickly list images the way I want.

The new default I set should be what you need in most cases and is -- I would think -- a practical output format, which should be OK for most admins.


But I also introduced a new dominoctl variable to specify your own format.


If not overridden by CONTAINER_IMAGES_LIST_FORMAT, the script uses the following command.



docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.ID}}\t{{.Size}}\t{{.CreatedSince}}\t{{.Repository}}:{{.Tag}}"
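

If you prefer a plain shell alias instead of dominoctl, the same format string works directly (assuming bash or zsh):

alias di='docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.ID}}\t{{.Size}}\t{{.CreatedSince}}\t{{.Repository}}:{{.Tag}}"'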



Here is how the new default output looks. I am missing the created date, which is often useful to see the age of an image.

Especially to spot newly created Domino container images.


docker images


IMAGE                                     ID             DISK USAGE   CONTENT SIZE   EXTRA

domino-nrpc-sni:latest                    7110286a46c4       32.4MB             0B

grafana/grafana-enterprise:latest         7ddf287666f9        800MB             0B

hclcom/domino:14.5.1EA2                   13568611b681       2.12GB             0B

hclcom/domino:14.5FP1                     5f97041c031e       2.16GB             0B

hclcom/domino:latest                      13568611b681       2.12GB             0B

mariadb:11                                bfe9184ea9e5        334MB             0B

nashcom/dominodownload:latest             2e1568f167b3       92.8MB             0B

nginx:latest                              60adc2e137e7        152MB             0B

prom/prometheus:latest                    20a11eec2fec        378MB             0B

registry.access.redhat.com/ubi10:latest   df13daf31556        212MB             0B



There is a --no-trunc option which gives you the full information, including the full hash and the created time.
But the full ID (SHA256) is almost never needed.



docker images --no-trunc


REPOSITORY                         TAG         IMAGE ID                                                                  CREATED          SIZE

hclcom/domino                      14.5.1EA2   sha256:13568611b68116590614675779b6e9489526cb1bd795da1f0052ac67e351c25c   25 minutes ago   2.12GB

hclcom/domino                      latest      sha256:13568611b68116590614675779b6e9489526cb1bd795da1f0052ac67e351c25c   25 minutes ago   2.12GB

hclcom/domino                      14.5FP1     sha256:5f97041c031e1b74dd12ce03ff8fc6635911b30693823428f2ea02cceabac5f2   49 minutes ago   2.16GB

mariadb                            11          sha256:bfe9184ea9e5b839e6b01d279a2e508a3bb50e66c843dfd2183dd7608550ce78   2 weeks ago      334MB

nashcom/dominodownload             latest      sha256:2e1568f167b3b261a484aa233bdb725b7be63119c59a501f61cc2676cad31fad   3 weeks ago      92.8MB

grafana/grafana-enterprise         latest      sha256:7ddf287666f9a2a9bcfad24d1e916f2c5b3e967101e06063904086b2465b661e   6 weeks ago      800MB

registry.access.redhat.com/ubi10   latest      sha256:df13daf315567aaa5c6e3045b9dc83cd99c670d64859eac902798d7a64187e75   6 weeks ago      212MB

prom/prometheus                    latest      sha256:20a11eec2fec912184d2acaf1c3052ee163919feb3c8a0217fa6607b686b9b4c   6 weeks ago      378MB

nginx                              latest      sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5   2 months ago     152MB

domino-nrpc-sni                    latest      sha256:7110286a46c4ec5d51b6ff40bc1d24a49755add3a5b59696b94d2c68d3cd8f80   3 months ago     32.4MB



Here is the format I came up with.

It shows the size before the created date and also shows the repository:tag. That's often useful for copy and paste.


dominoctl images



REPOSITORY                         TAG         IMAGE ID       SIZE      CREATED          REPOSITORY:TAG

hclcom/domino                      14.5.1EA2   13568611b681   2.12GB    26 minutes ago   hclcom/domino:14.5.1EA2

hclcom/domino                      latest      13568611b681   2.12GB    26 minutes ago   hclcom/domino:latest

hclcom/domino                      14.5FP1     5f97041c031e   2.16GB    50 minutes ago   hclcom/domino:14.5FP1

mariadb                            11          bfe9184ea9e5   334MB     2 weeks ago      mariadb:11

nashcom/dominodownload             latest      2e1568f167b3   92.8MB    3 weeks ago      nashcom/dominodownload:latest

grafana/grafana-enterprise         latest      7ddf287666f9   800MB     6 weeks ago      grafana/grafana-enterprise:latest

registry.access.redhat.com/ubi10   latest      df13daf31556   212MB     6 weeks ago      registry.access.redhat.com/ubi10:latest

prom/prometheus                    latest      20a11eec2fec   378MB     6 weeks ago      prom/prometheus:latest

nginx                              latest      60adc2e137e7   152MB     2 months ago     nginx:latest

domino-nrpc-sni                    latest      7110286a46c4   32.4MB    3 months ago     domino-nrpc-sni:latest



In this case it was easy to address. And now there is more flexibility and choice.


I wish vendors would take more care when changing functionality and offer choices when they are not sure if the changed behavior is good for everyone.

They could also ask their community -- Docker Captains and HCL Ambassadors for example -- before updating something customers might rely on.




Markdown as a Way Out for the Domino Blog Template?

Daniel Nashed – 28 January 2026 20:08:53

It’s 2026, and the Domino blog template is still not something I enjoy using.
Every time I post something with more complex formatting, I end up reformatting it after submission — sometimes more than once — just to get it into a decent shape.


This isn’t a simple CSS problem. The real issue is how the blog template processes and rewrites the content you add to a post.

Cleaning this up is not trivial. The template logic would need significant changes, and even then we would still be limited by Notes’ rather dated HTML and CSS support.



I don’t want to move my blog to a different system. Over the years it has accumulated a lot of useful information — for others, but also for myself.
(By the way, the main posts without comments can be replicated from home.openntf.net/openntf-net via NRPC without cross-certification).

So instead of trying to fix the blog template itself, I started thinking about a different approach.


A Different Idea: Markdown as the Source



What about extending the blog template with support for Markdown?

The idea would be simple:


  • The input is Markdown
  • Markdown is converted to HTML
  • Both Markdown and HTML are stored in the document in parallel


The converted HTML would not necessarily render perfectly in the Notes client, but it would render exactly as intended when served to the web. Notes would just need a simpler representation.
I already have code that converts Markdown to HTML, and I have tested storing both formats using multipart MIME in body fields. This works well and still allows re-editing later.
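
As a generic stand-in for the conversion step outside of Notes, something like pandoc would do (an illustration only -- the template would use its own converter):

# Convert GitHub-flavored Markdown to HTML5
pandoc -f gfm -t html5 -o post.html post.md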


Notes vs. Web Rendering



Today, I already have to restrict myself to very basic CSS if I want content to look acceptable in the Notes client.

With a Markdown-based approach, we could clearly separate the two worlds:


  • A simple CSS for rendering in the Notes client
  • A modern CSS (for example GitHub’s Markdown CSS) for web browsers


The content stays the same. Only the presentation changes.


Beyond the Blog Template



This approach may actually be useful beyond blogging.
I started a database some time ago that stores Markdown converted to HTML using multipart MIME. Notes clients can render this content natively, and the Markdown source can still be edited later.

This would also work for regular Notes documents. All that’s really needed is:


  • Edit in a separate database
  • Store it back into the currently open document in MIME with MD as text and HTML
  • Possibly have two different versions. Notes needs the CSS directly in the MIME part.
     

Here is a screenshot of what I already have working.
Note: In this database “Show Page Source” shows the HTML.



How do you like the idea?



Large-Scale Domino Storage Considerations (Last blog post questions reply)

Daniel Nashed – 28 January 2026 19:36:03
This post is a response to a detailed question about large-scale Domino deployments from Mike.

It is not possible to cover every aspect of a large deployment in a single post. At larger scales, the details matter, and careful planning is required.
The goal here is to highlight considerations that are useful for both smaller and larger environments.

For a Domino server with 8 TB of data, storage design requires careful planning.


ZFS and TrueNAS SCALE



ZFS is a powerful and highly scalable file system with a lot of flexibility.
At the same time, it must be handled with care, especially at larger volumes.

TrueNAS SCALE is a solid storage appliance and will scale to the limits of the underlying hardware.



Depending on the final design, ZFS usually requires:

  • A mix of standard disks and high-speed SSDs
  • SSDs for cache and metadata
  • Sufficient RAM


Cache design and sizing are separate topics and not covered here.


Compression



Compression is only effective if the data can actually be compressed.
  • Attachments are often already compressed
  • Attachments may already be stored using LZ1 compression
  • This reduces the effectiveness of filesystem-level compression

Notes application data usually compresses well.

In most cases, the additional CPU cost is a good trade-off for reduced I/O.



Deduplication


Deduplication is mainly useful for:

  • Backups
  • Shared DAOS stores used by multiple servers

Some customers operate many terabytes of DAOS data on storage appliances where deduplication provides significant savings.



Deduplication and Encryption


Deduplication and encryption do not work together.

This is true even if multiple servers share the same DAOS encryption keys: encrypted data will not deduplicate.

DAOS Encryption


DAOS uses a hybrid encryption model:

  • A public/private DAOS key pair exists per server
  • A new symmetric key is generated for each NLO (attachment)
  • The attachment data is encrypted with the symmetric key
  • The symmetric key is encrypted with the public DAOS key

This is a standard encryption approach and is also used for Notes document encryption and S/MIME.

Because a new symmetric key is generated for each NLO, the same attachment encrypted with the same DAOS key will always result in different encrypted data.



As a result

  • DAOS-encrypted data cannot be deduplicated
  • This applies across servers and in backups


When DAOS Encryption Is Useful



DAOS encryption is enabled by default to avoid security concerns around DAOS.

In practice, extracting data from an NSF file is usually easier than extracting data from an NLO.


DAOS encryption is mainly useful when:

  • NSF files are also encrypted
  • The server ID has a password
  • The DAOS store is located outside the Domino server (for example in a cloud environment)

If data is highly sensitive, end-to-end encryption using user public keys is required. In that case, the data is not decryptable by administrators or servers.

DAOS encryption is not useless, but in many environments it is not strictly required.
Deduplication benefits for shared DAOS stores or backups can be significant.


ZFS Record Size



The default ZFS record size is 128 KB.
This is suitable for:
  • DAOS data
  • Backup data


Domino databases perform better with smaller record sizes.



Recommended Record Sizes

  • NSF / NIF / FT: 16 KB preferred, 32 KB maximum
  • Transaction log: 32 KB or smaller
  • DAOS and backup data: 128 KB
    (64 KB may improve deduplication but increases RAM usage)


Important Note



If the Domino parameter Create_R85_Log=1 is enabled and the transaction log record size exceeds 32 KB, Domino will crash.

Like most database systems, Domino generally performs best with smaller block sizes.

16 KB would be the recommended record size.
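
A sketch of a dataset layout following these recommendations (pool and dataset names are examples only):

# Separate datasets so each gets the record size that fits its I/O pattern
zfs create -o recordsize=16K  -o compression=lz4 tank/domino/data
zfs create -o recordsize=32K  -o compression=lz4 tank/domino/translog
zfs create -o recordsize=128K -o compression=lz4 tank/domino/daos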


Additional optimization



Moving FT index and NIFNSF data out to different file systems is also helpful.

This would be especially important for snapshot backups. But it would also greatly reduce the file system size of your server.
You could put both on the same file system in different folders.



Platform Notes



For large and modern deployments, Linux is usually the most scalable and best tunable platform.
OS/400 (IBM i / iSeries) is fundamentally different from other platforms and does not receive all modern Domino features. Migration to Linux should be considered where possible.
Windows may still be required in some environments, but where there are no specific dependencies, Linux is often the preferred choice from security and TCO point of view.



HCL Traveler HA with MSSQL and custom CA in a container end to end

Daniel Nashed – 28 January 2026 18:10:48

The Domino container image supports Microsoft SQL Server, MySQL and also PostgreSQL -- which is available since Domino 14.5.1 EA.
The Traveler team also changed the travelerUtil script to work without root permissions, so you now run it with the "notes" user.
This aligns the security model with other parts of Domino and helps especially in container environments where root shells are not available, because the container itself (e.g. on K8s) runs as the notes user (uid:gid -> 1000:1000).


Yesterday I added symbolic link support for the JDBC drivers so that a container update with a later JDBC driver does not need a change for the Traveler configuration.
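
The idea looks roughly like this (the versioned jar file name is just an example; the stable link name is what the Traveler configuration points to):

# The configuration references the stable name, the link follows driver updates
cd /opt/hcl/domino/notes/latest/linux/Traveler/lib
ln -s mssql-jdbc-12.8.1.jre11.jar mssql-jdbc.jar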

I have tested the new configuration in two ways: first with the Microsoft JDBC driver, but also with the new PostgreSQL database support.
For PostgreSQL I used a simple Docker container on the same local Docker host.
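
A minimal local PostgreSQL container for such a test could look like this (database name and credentials match the travelerUtil call below; the image tag is an assumption):

docker run -d --name traveler-postgres -p 5432:5432 \
  -e POSTGRES_DB=traveler_ha -e POSTGRES_USER=traveler \
  -e POSTGRES_PASSWORD=traveler-ha-password postgres:16

The Traveler database connection is then configured via travelerUtil: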


./travelerUtil db set url=jdbc:postgresql://127.0.0.1:5432/traveler_ha path=/opt/hcl/domino/notes/latest/linux/Traveler/lib/postgresql-jdbc.jar user=traveler pw=traveler-ha-password



PostgreSQL would not offload TLS and would need a certificate, where rotating it would require a database restart.
But you might get away with a 1-year certificate lifetime, start to rotate it after 6 months, and pick up the update at each maintenance.


It's similar with Microsoft SQL Server, as a customer's DBA explained to me when configuring it with TLS today.



./travelerUtil db set "url=jdbc:sqlserver://mssql.example.com:1433;encrypt=true;databaseName=TRAVELER" user=traveler_ha 'pw=traveler-admin-secure-password' path="/opt/hcl/domino/notes/latest/linux/Traveler/lib/mssql-jdbc.jar"



Importing trusted roots at container build time



In our case the certificate was issued by a corporate CA. This means the root certificate was neither in the Linux trust store nor in the Domino Java keystore (where Traveler would expect it).

But there is a build option for the container image to add your own CA to the Linux and/or Domino Java key store.
For Domino itself you can just add the trusted root to certstore.nsf and get it deployed domain wide.


But for Traveler it really needs to be in the Java keystore.


See this documentation for details -->
https://opensource.hcltechsw.com/domino-container/reference_custom_roots/

In our case we just dropped the PEM with the right name into the custom directory at build time.
The build process knows how to add the root to the JVM keystore and the Linux trust store (which can differ between distributions).
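
To verify after the build that the corporate root actually made it into the JVM keystore, keytool from the Domino JVM can be used (the keytool path and the grep pattern are assumptions; "changeit" is the default cacerts password):

/opt/hcl/domino/notes/latest/linux/jvm/bin/keytool -list -cacerts -storepass changeit | grep -i "corporate"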



If you’re interested in RAG, you should watch the OpenRAG Submit video

Daniel Nashed – 27 January 2026 20:53:01

Niklas Heidloff published a short blog post that pointed me to it:
https://heidloff.net/article/openrag/
That post was the trigger to look at this excellent OpenRAG Submit video.



OpenRAG Submit

https://www.youtube.com/watch?v=y3Tr1i_ynvI


Even if you never plan to use OpenRAG, this video is absolutely worth watching.
What makes it stand out is that it explains RAG by breaking it down into individual components, instead of treating it as a black box. It focuses on architecture, responsibilities, and trade-offs rather than on a single framework or product.

This is not just an OpenRAG video — it’s one of the clearest explanations of modern RAG architectures I’ve seen so far.

The speakers are clearly experts in their fields, but just as importantly, they are excellent communicators. The presentation is structured, easy to follow, and genuinely enjoyable to watch.
And yes — I love the Run MCP T-shirt.


My Takeaways from the video

 
  • Many tools used in AI, RAG, and related components are open source
    That makes perfect sense. Openness, transparency, and extensibility are essential in this space.
    This also helps to accelerate development in the AI space.


  • RAG does not automatically mean “vector database”
    Vector databases are useful, but they are not mandatory. The video does a great job of explaining alternative approaches and when they make sense.


  • MCP servers are an important part of the stack
    They play a key role in how components communicate and integrate.


  • Data ingestion and tokenization really matter
    How data is ingested — and how it is tokenized — has a major impact on results. Tokenization strategies change over time and should be flexible and pluggable.


  • Everything should be pluggable
    All components in a RAG architecture should be replaceable. One of the most important emerging standards enabling this is MCP.

